Fully Projection-Free Proximal Stochastic Gradient Method With Optimal Convergence Rates
Similar Resources
An Effective Gradient Projection Method for Stochastic Optimal Control
In this work, we propose a simple yet effective gradient projection algorithm for a class of stochastic optimal control problems. The basic iteration block computes the gradient projection of the objective functional by solving the state and co-state equations via Euler methods and by using Monte Carlo simulations. Convergence properties are discussed and extensive numerical tests are...
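A minimal sketch of the kind of iteration this abstract describes, under heavy simplification: the control is a finite-dimensional vector, the gradient of the objective is estimated by averaging Monte Carlo samples, and the admissible set is a box so that projection reduces to clipping. The names `sample_gradient`, `lo`, and `hi` are hypothetical placeholders; in the paper itself the gradient comes from solving the state and co-state equations with Euler schemes.

```python
import numpy as np

def projected_gradient_step(u, sample_gradient, lo, hi, step=0.1, n_samples=1000):
    """One projected-gradient iteration for a box-constrained control u.

    sample_gradient(u) is assumed to return a single Monte Carlo sample of
    the objective's gradient (a placeholder for the state/co-state solve).
    """
    # Monte Carlo estimate of the gradient of the objective functional
    grad = np.mean([sample_gradient(u) for _ in range(n_samples)], axis=0)
    # Gradient step followed by projection onto the box [lo, hi]
    return np.clip(u - step * grad, lo, hi)
```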
A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates
This paper considers the problem of decentralized optimization with a composite objective containing smooth and non-smooth terms. To solve the problem, a proximal-gradient scheme is studied. Specifically, the smooth and non-smooth terms are handled by a gradient update and a proximal update, respectively. The studied algorithm is closely related to a previous decentralized optimization algorithm,...
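To make the gradient/proximal split concrete, here is a minimal centralized sketch of one proximal-gradient step for a composite objective f(x) + lam * ||x||_1: a gradient step on the smooth term f followed by the proximal operator of the l1 term (soft-thresholding). This is a generic illustration, not the paper's decentralized algorithm; network communication and per-node step-sizes are omitted.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient_step(x, grad_f, step, lam):
    """One step for min_x f(x) + lam * ||x||_1: gradient update on the
    smooth term, then proximal update on the non-smooth term."""
    return soft_threshold(x - step * grad_f(x), step * lam)
```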
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the s...
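As a toy illustration of where the errors enter, the sketch below performs one proximal-gradient step when the gradient of the smooth term is only available up to an additive error; the error model and the names `grad_f`, `prox_g`, and `grad_error` are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def inexact_prox_grad_step(x, grad_f, prox_g, step, grad_error=None):
    """One inexact proximal-gradient step: the gradient of the smooth term
    carries an additive error, then the proximity operator is applied."""
    e = grad_error if grad_error is not None else np.zeros_like(x)
    return prox_g(x - step * (grad_f(x) + e), step)
```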
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
We consider the problem of minimizing the sum of two convex functions: one is the average of a large number of smooth component functions, and the other is a general convex function that admits a simple proximal mapping. We assume the whole objective function is strongly convex. Such problems often arise in machine learning, known as regularized empirical risk minimization. We propose and analy...
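A compact sketch of a proximal stochastic gradient loop with SVRG-style progressive variance reduction, the construction this abstract points at: an outer loop anchors a full gradient at a snapshot, and inner iterations use variance-reduced stochastic gradients followed by a proximal step. The interfaces `grads` and `prox` are assumed here for illustration; this is not the paper's reference implementation.

```python
import numpy as np

def prox_svrg(x0, grads, prox, step, n_outer=20, n_inner=100, rng=None):
    """Sketch of proximal SVRG for min_x (1/n) * sum_i f_i(x) + R(x).

    grads: list of callables, grads[i](x) = gradient of f_i at x.
    prox:  callable, prox(y, eta) = proximal mapping of eta * R at y.
    """
    rng = rng or np.random.default_rng()
    n = len(grads)
    x = x0.copy()
    for _ in range(n_outer):
        snapshot = x.copy()
        # Full gradient at the snapshot (the variance-reduction anchor)
        full_grad = np.mean([g(snapshot) for g in grads], axis=0)
        for _ in range(n_inner):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient estimate
            v = grads[i](x) - grads[i](snapshot) + full_grad
            # Proximal step on the regularizer
            x = prox(x - step * v, step)
    return x
```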
Optimal Rates for Learning with Nyström Stochastic Gradient Methods
In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches. Generalization error bounds for the studied algorithm are provided. In particular, optimal learning rates are derived considering different possible choices of the step-size, the mini-batch size, the numbe...
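A rough sketch of the combination described above, under strong simplifications: Nyström features are built from a Gaussian kernel on m subsampled landmarks, and plain mini-batch SGD with squared loss runs on the resulting finite-dimensional model. The kernel choice, landmark count, step-size, and batch size here are placeholder assumptions, not the tuned values the learning-rate bounds call for.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_sgd(X, y, m=50, step=0.1, batch=16, epochs=10, rng=None):
    """Mini-batch SGD for least-squares regression in a Nystrom feature space."""
    rng = rng or np.random.default_rng()
    n = len(X)
    landmarks = X[rng.choice(n, size=min(m, n), replace=False)]
    # Nystrom feature map: K_nm @ K_mm^{-1/2} via an SVD of K_mm
    U, s, _ = np.linalg.svd(gaussian_kernel(landmarks, landmarks))
    W = U / np.sqrt(np.maximum(s, 1e-12))
    Phi = gaussian_kernel(X, landmarks) @ W
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        for idx in np.split(rng.permutation(n), range(batch, n, batch)):
            # Stochastic gradient of the squared loss on the mini-batch
            w -= step * Phi[idx].T @ (Phi[idx] @ w - y[idx]) / len(idx)
    return w, landmarks, W
```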
Journal
Journal Title: IEEE Access
Year: 2020
ISSN: 2169-3536
DOI: 10.1109/access.2020.3019885